Section: New Results

Vocabularies, Semantic Web and Linked Data based Knowledge Representation

Semantic Web for Biodiversity

Participants : Franck Michel, Catherine Faron Zucker.

As a continuation of the work initiated with the Muséum National d'Histoire Naturelle of Paris during the last two years, we have proposed a model to represent taxonomic and nomenclatural information as Linked Data, and we have published the French taxonomic register on the Web according to this model (http://taxref.mnhn.fr/lod/Dataset/10.0/). We are now leveraging this work to develop an activity related to biodiversity data sharing and integration: we presented the model and dataset at a workshop of the ISWC conference [38] as well as at the TDWG conference on biodiversity information standards [37]. We are in the process of publishing this dataset on AgroPortal (http://agroportal.lirmm.fr/), the BioPortal-based ontology repository for agronomy and agriculture. We are also involved in the Bioschemas.org W3C community group, with the objective of fostering the definition and adoption of common biodiversity-related markup.
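
As a minimal illustration of how such Linked Data can be consumed, the following Python sketch dereferences a taxon URI with rdflib and lists its labels. The taxon URI is hypothetical, and the assumption that taxa carry skos:prefLabel annotations should be checked against the actual model.

    # Sketch: exploring the TAXREF Linked Data with rdflib.
    # Assumptions: the taxon URI below is hypothetical, and taxa are
    # modelled as SKOS concepts; adapt to the actual vocabulary.
    from rdflib import Graph
    from rdflib.namespace import SKOS

    g = Graph()
    g.parse("http://taxref.mnhn.fr/lod/taxon/60585")  # hypothetical URI

    for taxon, _, label in g.triples((None, SKOS.prefLabel, None)):
        print(taxon, label)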

RDF Modeling of Educational Resources

Participants : Geraud Fokou Pelap, Catherine Faron Zucker, Fabien Gandon, Olivier Corby.

EduMICS (Educative Models Interactions Communities with Semantics) is a joint laboratory (LabCom, 2016-2018) between the Wimmics team and the Educlever company. Adaptive learning, social learning, Linked Open Data, and the links between them are at the core of this LabCom. The purpose of EduMICS is to develop both research and technologies, with the ultimate goal of adapting educational progressions and pedagogical resource recommendations to learner profiles.

This year, we proposed a novel RDF model for Educlever's data. Once this new model had been validated, we built a tool to migrate Educlever's data from the old model to the new one. We also benchmarked the new model against Educlever's use case queries; to do so, we stored the RDF data in four triplestores: Corese, AllegroGraph, GraphDB and Virtuoso. The next steps of the project are to run tests in a real production environment, to find a way to use a graph database in an RDF context, and to determine how to perform reasoning in order to recommend and adapt learning activities. Topics covered by EduMICS include: ontology-based modeling of educational resources; ontology-based integration of heterogeneous data sources; ontology-based reasoning; semantic analysis of a social network of learners; pedagogical resource recommendation adapted to learner profiles.
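
A minimal sketch of the kind of rule-based migration step such a tool performs, expressed as a SPARQL CONSTRUCT query run with rdflib; the old: and new: vocabularies below are placeholders, not the actual Educlever schemas.

    # Sketch of one RDF migration rule: rewrite instances of the old
    # model into the new one. Vocabularies are hypothetical placeholders.
    from rdflib import Graph

    MIGRATE = """
    PREFIX old: <http://example.org/old#>
    PREFIX new: <http://example.org/new#>
    CONSTRUCT { ?r a new:PedagogicalResource ; new:title ?t . }
    WHERE     { ?r a old:Resource ; old:label ?t . }
    """

    legacy = Graph().parse("legacy-data.ttl")   # data in the old model
    result = legacy.query(MIGRATE)
    result.graph.serialize(destination="migrated-data.ttl", format="turtle")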

Intelliquiz Project

Participants : Oscar Rodríguez Rocha, Catherine Faron Zucker.

Intelliquiz is a research project carried out in collaboration with Qwant. The main goal of this project is to create a smart quiz game engine, able to:

  1. generate credible alternative answers from a given set of questions and answers (one possible strategy is sketched after this list),

  2. generate a multiple-choice question game based on specific subjects (initially by exploiting the unstructured dataset of the famous French multiple-choice question game "Les Incollables"),

  3. generate a set of multiple-choice questions and answers (a quiz) about a specific subject, to be proposed to a user (a learner/player),

  4. adapt the resulting quiz to the user's profile, context and past experience,

  5. set up the fundamentals of an intelligent platform for education.
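
For goal 1, one conceivable strategy is to draw distractors from entities that share a type with the correct answer in a knowledge base. The Python sketch below queries the public DBpedia endpoint this way; it is an illustration of the idea, not the engine actually built with Qwant.

    # Sketch of a distractor-generation heuristic: propose entities
    # sharing a DBpedia type with the correct answer.
    from SPARQLWrapper import SPARQLWrapper, JSON

    def candidate_distractors(answer_uri, limit=3):
        sparql = SPARQLWrapper("http://dbpedia.org/sparql")
        sparql.setQuery(f"""
            SELECT DISTINCT ?other WHERE {{
                <{answer_uri}> a ?type .
                ?other a ?type .
                FILTER (?other != <{answer_uri}>)
            }} LIMIT {limit}
        """)
        sparql.setReturnFormat(JSON)
        rows = sparql.query().convert()["results"]["bindings"]
        return [r["other"]["value"] for r in rows]

    print(candidate_distractors("http://dbpedia.org/resource/Paris"))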

This work was published at EDULEARN [32].

Reconciling DBpedia Chapters

Participants : Serena Villata, Elena Cabrio, Fabien Gandon.

Together with Alessio Palmero Aprosio (FBK, Italy), we addressed the issue of reconciling information obtained by querying the SPARQL endpoints of language-specific DBpedia chapters. DBpedia is an RDF triple store whose content is automatically created by extracting information from Wikipedia. Starting from a categorization of the possible relations among the resulting instances, we provide a framework to: (i) classify such relations, (ii) reconcile information using argumentation theory, (iii) rank the alternative results depending on the confidence of the source in case of inconsistencies, and (iv) explain the reasons underlying the proposed ranking. We released the resource obtained by applying our framework to a set of language-specific DBpedia chapters, and integrated the framework into the question answering system QAKiS, which exploits these chapters as RDF datasets queried through a natural language interface. The results of this research have been published in the Semantic Web Journal [25].
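
The situation the framework reconciles can be reproduced with a few lines of Python: ask two chapters for the same property and flag disagreement. The endpoints are the public English and French chapters; the resource and property are chosen for illustration only.

    # Sketch: fetch one property from two DBpedia chapters and flag
    # inconsistent values, the raw material for reconciliation.
    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINTS = {
        "en": "http://dbpedia.org/sparql",
        "fr": "http://fr.dbpedia.org/sparql",
    }
    # In practice each chapter uses its own resource URIs, aligned
    # through owl:sameAs links; this sketch glosses over that step.
    QUERY = """
    SELECT ?pop WHERE {
        <http://dbpedia.org/resource/Nice>
            <http://dbpedia.org/ontology/populationTotal> ?pop .
    }
    """

    values = {}
    for chapter, url in ENDPOINTS.items():
        endpoint = SPARQLWrapper(url)
        endpoint.setQuery(QUERY)
        endpoint.setReturnFormat(JSON)
        rows = endpoint.query().convert()["results"]["bindings"]
        values[chapter] = {r["pop"]["value"] for r in rows}

    if values["en"] != values["fr"]:
        print("Chapters disagree:", values)  # a case for reconciliation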

Ontological Representation of Normative Requirements

Participants : Serena Villata, Elena Cabrio, Fabien Gandon.

Together with Guido Governatori (Data61, Australia), we have proposed a proof of concept for the ontological representation of normative requirements as Linked Data on the Web. Starting from the LegalRuleML ontology, we present an extension of this ontology to model normative requirements and rules. Furthermore, we define an operational formalization of deontic reasoning over these concepts on top of the Semantic Web languages. The results of this research have been published at the JURIX 2017 conference [52].
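
Purely for illustration, the Python sketch below encodes a deontic statement as RDF and queries it; the lrml: terms are placeholders in the general style of LegalRuleML and do not reproduce the actual ontology extension from the paper.

    # Illustration only: an obligation as RDF, with placeholder terms.
    from rdflib import Graph

    TTL = """
    @prefix lrml: <http://example.org/lrml#> .
    @prefix ex:   <http://example.org/norms#> .

    ex:rule1 a lrml:Obligation ;
        lrml:hasBearer  ex:DataProvider ;
        lrml:hasTarget  ex:PublishLicense ;
        lrml:hasPenalty ex:Suspension .
    """

    g = Graph().parse(data=TTL, format="turtle")
    for row in g.query("""
        PREFIX lrml: <http://example.org/lrml#>
        SELECT ?rule ?target WHERE {
            ?rule a lrml:Obligation ; lrml:hasTarget ?target .
        }"""):
        print(row.rule, "obliges", row.target)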

LDScript Linked Data Script Language

Participants : Olivier Corby, Catherine Faron Zucker, Fabien Gandon.

LDScript is a script language for the Semantic Web of Linked Data, built on top of the SPARQL filter language. It enables users to define extension functions for SPARQL queries in a language that is highly compatible with SPARQL. This year we generalized and unified the design of the language with queries inside functions, second-order functions (map, funcall, apply, reduce), lambda expressions, a return statement, and method calls. We also introduced extension datatypes to manage lists, triples, graphs and query solution mappings as extended RDF literals. This work was published at ISWC and listed as a spotlight paper in the conference program [32].
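
LDScript itself runs inside the Corese engine. As a rough Python analogue of what a SPARQL extension function is, the sketch below registers a custom function with rdflib and calls it from a query; the fn:half function and its namespace are hypothetical.

    # Sketch: a custom SPARQL extension function in rdflib, analogous
    # in spirit to an LDScript extension function (not LDScript itself).
    from rdflib import Graph, Literal, URIRef
    from rdflib.plugins.sparql.operators import register_custom_function

    EX = "http://example.org/fn#"  # hypothetical namespace

    def half(x):
        # x arrives as an rdflib Literal; return a Literal as well.
        return Literal(x.toPython() / 2)

    register_custom_function(URIRef(EX + "half"), half)

    g = Graph()
    for row in g.query("""
        PREFIX fn: <http://example.org/fn#>
        SELECT (fn:half(10) AS ?h) WHERE {}
    """):
        print(row.h)  # -> 5.0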

SHACL Validator

Participant : Olivier Corby.

In the context of the SHACL W3C working group, which designed the Shapes Constraint Language for validating RDF graphs, we have written a SHACL validator using two languages developed in the team: STTL, the SPARQL Template Transformation Language, and LDScript. An online demo server has been set up (http://corese.inria.fr).
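
Our validator relies on STTL and LDScript; for readers who simply want to experiment with SHACL validation itself, a minimal example with the independent pySHACL library looks as follows (file names are placeholders).

    # Sketch: validating an RDF graph against SHACL shapes with pySHACL.
    from rdflib import Graph
    from pyshacl import validate

    data = Graph().parse("data.ttl")      # graph to validate
    shapes = Graph().parse("shapes.ttl")  # SHACL shapes graph

    conforms, report_graph, report_text = validate(data, shacl_graph=shapes)
    print(conforms)
    print(report_text)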

HAL Open Data

Participant : Olivier Corby.

The HAL open archive is provided with an open data SPARQL endpoint (http://data.archives-ouvertes.fr/sparql). We participated, with the CNRS CCSD team (Center for Direct Scientific Communication), in the design of the RDF schema of the open data server, and we developed a Linked Data hypertext navigator (http://corese.inria.fr) on top of the HAL open data server.
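
A minimal probe of this endpoint with SPARQLWrapper, making no assumption about the HAL schema itself: the query only inspects which classes the dataset uses.

    # Sketch: list a few classes exposed by the HAL SPARQL endpoint.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://data.archives-ouvertes.fr/sparql")
    sparql.setQuery("SELECT DISTINCT ?class WHERE { ?s a ?class . } LIMIT 10")
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["class"]["value"])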

Graph Database for the Semantic Web

Participants : Erwan Demairy, Olivier Corby.

In the context of a two-year Inria grant, we conducted, in collaboration with Johan Montagnat (I3S, CNRS), a study of graph databases (OrientDB, Titan, Neo4j) and of the TinkerPop abstract query language. The purpose of this study was to design a mapping between RDF statements and graph databases, and conversely. In addition, we designed a mapping of SPARQL query patterns to TinkerPop. Using Corese, we implemented a binding of a generic SPARQL interpreter on top of TinkerPop that enables us to query an RDF-oriented graph database with SPARQL.
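
A toy Python sketch of the idea behind such a mapping: each RDF triple becomes a labelled edge of a property graph, so that a SPARQL basic graph pattern corresponds to a traversal over edge labels, much like TinkerPop's out() step. This ignores literals, datatypes and named graphs, which the real Corese/TinkerPop binding must handle.

    # Toy illustration: RDF triples as property-graph edges.
    from collections import defaultdict

    class PropertyGraph:
        def __init__(self):
            self.out_edges = defaultdict(list)  # vertex -> [(label, vertex)]

        def add_edge(self, v_from, label, v_to):
            self.out_edges[v_from].append((label, v_to))

    def rdf_to_graph(triples):
        pg = PropertyGraph()
        for s, p, o in triples:
            pg.add_edge(s, p, o)  # predicate IRI becomes the edge label
        return pg

    pg = rdf_to_graph([("ex:alice", "foaf:knows", "ex:bob")])
    # The pattern (?x foaf:knows ?y) maps to a traversal over
    # edges labelled foaf:knows.
    print(pg.out_edges["ex:alice"])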

Mobile Linked Data Sharing in Technologically Constrained Environment

Participants : Fabien Gandon, Mahamadou Toure.

During the second year of the MoReWAIS project, we finalized the state of the art report [61]. In addition to the initial topics of the survey, namely client-side data caching and cache federation, data querying and sharing, Linked Open Data, and data privacy, we identified and added an important topic: collaborative RDF graph modification. This topic raises the question of identifying the mechanisms proposed in the literature that address the constraints related to availability, processing load balancing, and the reliability of sources and data in decentralized peer-to-peer architectures. These mechanisms make it possible to replicate, share and collaboratively modify a graph. We also worked on an architecture model (network and data) and opted for a three-tier architecture with a data model based essentially on graph replication and cooperative caching.
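
A schematic Python sketch of the cooperative caching idea in this three-tier setting: answer locally when possible, then ask nearby peers, and only fall back to the remote endpoint. Class and method names are illustrative, not the MoReWAIS implementation.

    # Sketch: three-tier lookup with local replication and peer caches.
    class CooperativeCache:
        def __init__(self, peers, remote_fetch):
            self.local = {}                # query -> cached results
            self.peers = peers             # other CooperativeCache nodes
            self.remote_fetch = remote_fetch

        def lookup(self, query):
            if query in self.local:        # tier 1: local replica
                return self.local[query]
            for peer in self.peers:        # tier 2: nearby mobile peers
                if query in peer.local:
                    self.local[query] = peer.local[query]
                    return self.local[query]
            results = self.remote_fetch(query)  # tier 3: remote server
            self.local[query] = results         # replicate locally
            return results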